The technologies used in smart homes have improved to learn user preferences from feedback in order to provide convenience to users. Most smart homes learn a single uniform model to represent the thermal preferences of users, which typically fails when the pool of occupants includes people of different ages, genders, and locations. Each user having a different thermal sensation poses a challenge for smart homes: learning personalized preferences for each occupant without forgetting the policies of the others. A smart home with a single optimal policy may fail to provide comfort when a new user with different preferences is integrated into the home. In this paper, we propose POSHS, a Bayesian reinforcement learning algorithm that can approximate the current occupant's state in a partially observable environment using their thermal preferences, and then decide whether it is a new occupant or one belonging to the pool of previously observed users. We then compare POSHS with an LSTM-based algorithm for learning and estimating the occupant's current state, while also taking optimal actions to reduce the time needed to set the preferences. We run these experiments with up to five simulated human models based on hierarchical reinforcement learning. The results show that POSHS can approximate the current user's state from their temperature and humidity preferences, and also reduces the number of time steps the human model needs to reach the optimal temperature and humidity in the presence of the smart home.
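As a rough illustration of the occupant-identification step described above, the following sketch maintains a likelihood score for each known occupant given an observed preference and flags a new occupant when no known profile explains the observation well. The profiles, Gaussian likelihood, and novelty threshold are our own illustration, not the POSHS algorithm itself.

```python
import math

def gaussian(x, mean, std):
    """Gaussian density, used as a toy preference likelihood."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def identify(observed_temp, profiles, new_user_threshold=0.05):
    """Return the most likely known occupant, or flag a new one."""
    likelihoods = {u: gaussian(observed_temp, m, s) for u, (m, s) in profiles.items()}
    if max(likelihoods.values()) < new_user_threshold:
        return "new-occupant"
    evidence = sum(likelihoods.values())
    posterior = {u: lk / evidence for u, lk in likelihoods.items()}
    return max(posterior, key=posterior.get)

profiles = {"u1": (21.0, 1.0), "u2": (25.0, 1.0)}  # preferred temp: (mean, std)
print(identify(21.3, profiles))  # close to u1's preference -> "u1"
print(identify(30.0, profiles))  # unlikely under both profiles -> "new-occupant"
```

A real system would track a posterior over time steps rather than scoring a single observation, but the decision structure is the same.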
We introduce a machine-learning (ML)-based weather simulator--called "GraphCast"--which outperforms the most accurate deterministic operational medium-range weather forecasting system in the world, as well as all previous ML baselines. GraphCast is an autoregressive model, based on graph neural networks and a novel high-resolution multi-scale mesh representation, which we trained on historical weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF)'s ERA5 reanalysis archive. It can make 10-day forecasts, at 6-hour time intervals, of five surface variables and six atmospheric variables, each at 37 vertical pressure levels, on a 0.25-degree latitude-longitude grid, which corresponds to roughly 25 x 25 kilometer resolution at the equator. Our results show GraphCast is more accurate than ECMWF's deterministic operational forecasting system, HRES, on 90.0% of the 2760 variable and lead time combinations we evaluated. GraphCast also outperforms the most accurate previous ML-based weather forecasting model on 99.2% of the 252 targets it reported. GraphCast can generate a 10-day forecast (35 gigabytes of data) in under 60 seconds on Cloud TPU v4 hardware. Unlike traditional forecasting methods, ML-based forecasting scales well with data: by training on bigger, higher quality, and more recent data, the skill of the forecasts can improve. Together these results represent a key step forward in complementing and improving weather modeling with ML, open new opportunities for fast, accurate forecasting, and help realize the promise of ML-based simulation in the physical sciences.
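The autoregressive rollout described above can be sketched minimally: a single learned one-step model is fed its own 6-hour prediction back as input, yielding a multi-day trajectory. The toy "model" below is pure persistence plus a constant drift, standing in for the graph neural network; all names are hypothetical.

```python
def autoregressive_forecast(step_fn, initial_state, lead_hours=240, dt_hours=6):
    """Roll a one-step model forward; returns one state per 6-hour interval."""
    trajectory = []
    state = initial_state
    for _ in range(lead_hours // dt_hours):
        state = step_fn(state)  # model predicts the state 6 hours ahead
        trajectory.append(state)
    return trajectory

# Toy stand-in for the learned model: persistence plus a constant drift.
forecast = autoregressive_forecast(lambda s: s + 0.1, initial_state=15.0)
print(len(forecast))  # 240 h / 6 h = 40 autoregressive steps
```

The 10-day horizon at 6-hour intervals thus costs 40 forward passes of the one-step model.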
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets. Despite promising results, current models still suffer from generating factually inconsistent summaries, reducing their utility for real-world application. Several recent efforts attempt to address this by devising models that automatically detect factual inconsistencies in machine generated summaries. However, they focus exclusively on English, a language with abundant resources. In this work, we leverage factual consistency evaluation models to improve multilingual summarization. We explore two intuitive approaches to mitigate hallucinations based on the signal provided by a multilingual NLI model, namely data filtering and controlled generation. Experimental results in the 45 languages from the XLSum dataset show gains over strong baselines in both automatic and human evaluation.
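The data-filtering approach mentioned above can be sketched as scoring each (document, summary) training pair with an entailment model and keeping only pairs judged consistent. Here `nli_score` is a stand-in for a real multilingual NLI model, and the threshold is illustrative.

```python
def filter_training_pairs(pairs, nli_score, threshold=0.5):
    """Keep (doc, summary) pairs the NLI model deems factually consistent."""
    return [(d, s) for d, s in pairs if nli_score(d, s) >= threshold]

pairs = [("doc A", "faithful summary"), ("doc B", "hallucinated summary")]
toy_nli = lambda doc, summ: 0.9 if "faithful" in summ else 0.1  # toy scorer
kept = filter_training_pairs(pairs, toy_nli)
print(kept)  # only the consistent pair survives filtering
```

Controlled generation, the second approach, would instead feed the consistency signal to the model at training or decoding time rather than discarding data.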
We consider the problem of automatically generating stories in multiple languages. Compared to prior work in monolingual story generation, crosslingual story generation allows for more universal research on story planning. We propose to use Prompting Large Language Models with Plans to study which plan is optimal for story generation. We consider 4 types of plans and systematically analyse how the outputs differ for different planning strategies. The study demonstrates that formulating the plans as question-answer pairs leads to more coherent generated stories while the plan gives more control to the story creators.
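One of the plan formats compared above, a plan expressed as question-answer pairs, might be rendered into a single prompt for a large language model roughly as follows. The exact wording and structure are our invention, not the paper's prompts.

```python
def qa_plan_prompt(qa_pairs, instruction="Write a story consistent with this plan."):
    """Render a question-answer plan into one prompt string."""
    plan = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return f"{plan}\n{instruction}"

prompt = qa_plan_prompt([
    ("Who is the protagonist?", "A retired sailor"),
    ("What problem do they face?", "A storm traps their village"),
])
print(prompt.splitlines()[0])  # first line of the rendered plan
```

Editing an answer in the plan gives the story creator direct control over the corresponding aspect of the generated story.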
Climate change is causing the intensification of rainfall extremes. Precipitation projections with high spatial resolution are important for society to prepare for these changes, e.g. to model flooding impacts. Physics-based simulations for creating such projections are very computationally expensive. This work demonstrates the effectiveness of diffusion models, a form of deep generative model, for generating realistic high-resolution rainfall samples for the UK much more cheaply, conditioned on data from a low-resolution simulation. We show, for the first time, a machine learning model that is able to produce realistic samples of high-resolution rainfall based on a physical model that resolves atmospheric convection, a key process behind extreme rainfall. By adding self-learnt, location-specific information to low-resolution relative vorticity, the quantiles and time-mean of the samples match their counterparts from the high-resolution simulation well.
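The conditional-generation idea can be caricatured in a few lines: starting from noise, each step nudges the sample toward a state consistent with the low-resolution conditioning input. Real diffusion models use a learned denoiser over many noise levels; `toward_condition` below is a hand-written stand-in, and all parameters are illustrative.

```python
import random

def sample_conditioned(condition, steps=50, seed=0):
    """Toy iterative refinement: noise -> sample consistent with the condition."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)            # start from pure noise
    for _ in range(steps):
        x = x + 0.2 * (condition - x)  # "denoising" step guided by the condition
    return x

high_res = sample_conditioned(condition=3.0)
```

In the actual setting the condition is a low-resolution field (e.g. relative vorticity) and the sample is a high-resolution rainfall map, but the control flow is the same: many small guided refinement steps.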
Federated learning (FL) has emerged as an instance of the distributed machine learning paradigm that avoids transmitting the data generated on the users' side. Although data are not transmitted, edge devices have to deal with limited communication bandwidths, data heterogeneity, and straggler effects due to the limited computational resources of users' devices. A prominent approach to overcoming such difficulties is FedADMM, which is based on the classical two-operator consensus alternating direction method of multipliers (ADMM). The common assumption of FL algorithms, including FedADMM, is that they learn a global model using data only on the users' side and not on the edge server. However, in edge learning, the server is expected to be near the base station and have direct access to rich datasets. In this paper, we argue that leveraging the rich data on the edge server is much more beneficial than utilizing only user datasets. Specifically, we show that the mere application of FL with an additional virtual user node representing the data on the edge server is inefficient. We propose FedTOP-ADMM, which generalizes FedADMM and is based on a three-operator ADMM-type technique that exploits a smooth cost function on the edge server to learn a global model in parallel with the edge devices. Our numerical experiments indicate that FedTOP-ADMM achieves a substantial gain of up to 33\% in communication efficiency in reaching a desired test accuracy, relative to FedADMM including a virtual user on the edge server.
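The three-operator structure referred to above can be illustrated numerically with Davis-Yin splitting, a standard three-operator method in the same family: the smooth term (analogous to the server-side cost) enters through a gradient step, while the two nonsmooth terms are handled by proximal steps. The toy problem and step size below are our own illustration, not the paper's federated setup.

```python
def soft_threshold(v, t):
    """Proximal operator of t*|x| (soft thresholding)."""
    return max(v - t, 0.0) + min(v + t, 0.0)

def davis_yin(grad_h, prox_f, prox_g, z=0.0, gamma=0.5, iters=200):
    """Three-operator splitting for min f(x) + g(x) + h(x), h smooth."""
    for _ in range(iters):
        x = prox_g(z)
        y = prox_f(2 * x - z - gamma * grad_h(x))
        z = z + y - x
    return prox_g(z)

# minimize 0.5*(x-3)^2 + |x| + indicator(x >= 0)  ->  minimizer x* = 2
x_star = davis_yin(
    grad_h=lambda x: x - 3.0,                 # smooth term: gradient step
    prox_f=lambda v: soft_threshold(v, 0.5),  # |x| with step gamma = 0.5
    prox_g=lambda v: max(v, 0.0),             # projection onto x >= 0
)
```

In FedTOP-ADMM the smooth operator is evaluated on the edge server's data in parallel with the clients' updates, which is where the communication savings come from.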
The combination of big data and deep learning is a disruptive technology that can have a great impact on any target domain if used properly. With the availability of large healthcare datasets and advances in deep learning techniques, systems can now predict the future trend of almost any health problem well. From the literature survey, we found that SVM has been used to predict cases of heart failure without taking the associated objective factors into account. Leveraging the strength of the significant historical information in electronic health records (EHR), we built an intelligent and predictive model using Long Short-Term Memory (LSTM) and predicted the future trend of heart failure based on those health records. The fundamental contribution of this work is therefore to predict heart failure using an LSTM based on patients' electronic medicinal information. We analyzed a dataset containing the medical records of 299 heart failure patients collected at the Faisalabad Institute of Cardiology and the Allied Hospital in Faisalabad (Punjab, Pakistan). The patients consist of 105 women and 194 men aged between 40 and 95. The dataset contains 13 features that report the clinical, physical, and lifestyle information responsible for heart failure. We found increasing trends in our analysis, which will help advance knowledge in the field of heart failure prediction.
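A hypothetical sketch of how EHR visit histories could be shaped for an LSTM: each patient's chronological records become a fixed-length sequence of 13-dimensional feature vectors, front-padded with zeros so the most recent visits align. The sequence length and padding scheme are our own choices, not taken from the paper.

```python
N_FEATURES = 13  # the dataset's clinical, physical, and lifestyle features
SEQ_LEN = 4      # visits per training sequence (illustrative)

def to_sequence(visits, seq_len=SEQ_LEN, n_features=N_FEATURES):
    """Pad or truncate a patient's visit list to a fixed-length LSTM input."""
    pad = [[0.0] * n_features for _ in range(max(0, seq_len - len(visits)))]
    return pad + visits[-seq_len:]

patient_visits = [[0.1] * N_FEATURES, [0.2] * N_FEATURES]  # two recorded visits
seq = to_sequence(patient_visits)
print(len(seq), len(seq[0]))  # fixed (seq_len, n_features) shape for an LSTM
```

Fixing the shape this way lets patients with different visit counts be batched together for recurrent training.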
We present a simple yet effective method for training a named entity recognition (NER) model that operates on business telephone conversation transcripts, which contain noise due to the nature of spoken dialogue and the artifacts of automatic speech recognition. We first fine-tune LUKE, a state-of-the-art NER model, on a limited number of transcripts, using weakly labeled data together with a small amount of human-annotated data. The model achieves high accuracy while also satisfying the practical constraints for inclusion in a commercial telephony product: real-time performance when deployed on cost-effective CPUs rather than GPUs.
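The training-data recipe above can be sketched as combining a small human-annotated set with a larger weakly labeled set produced by running an existing tagger over raw transcripts. All names and the toy tagger are hypothetical.

```python
def build_training_set(gold_examples, raw_transcripts, weak_tagger):
    """Combine gold annotations with machine-produced weak labels."""
    weak_examples = [(t, weak_tagger(t)) for t in raw_transcripts]
    return gold_examples + weak_examples

# Toy weak tagger: capitalized tokens get a person tag (illustrative only).
toy_tagger = lambda text: ["B-PER" if tok[0].isupper() else "O"
                           for tok in text.split()]
train = build_training_set(
    gold_examples=[("call from Alice", ["O", "O", "B-PER"])],
    raw_transcripts=["Bob will call back"],
    weak_tagger=toy_tagger,
)
print(len(train))  # 1 gold + 1 weakly labeled example
```

In practice the weak labels would come from a stronger source than a heuristic, and the mixing ratio of gold to weak data is a tuning choice.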
In recent years, multi-task learning has achieved great success in a variety of applications. Although training a single model has promised excellent results over the years, it ignores valuable information that could help us estimate a metric better. Under related learning tasks, multi-task learning is able to generalize models better. We attempt to enhance the feature mapping of multi-task models by sharing features between related tasks and by inductive transfer learning. In addition, we are interested in learning the relationships among the various tasks in order to obtain better gains from multi-task learning. In this chapter, our goal is to visualize existing multi-task models, compare their performance, present methods for evaluating multi-task model performance, and discuss the problems faced during their design and implementation in various domains as well as the advantages and milestones they have achieved.
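The feature-sharing idea discussed above is most simply realized as hard parameter sharing: one shared mapping feeds several task-specific heads, so related tasks reuse a common representation. The fixed toy weights below are purely illustrative.

```python
def shared_backbone(x):
    """Shared feature mapping reused by every task."""
    return [x * 0.5, x + 1.0]

def task_head_a(features):
    """Task A's own head on top of the shared features."""
    return features[0] + features[1]

def task_head_b(features):
    """Task B's head: different output, same shared features."""
    return features[0] - features[1]

feats = shared_backbone(2.0)
print(task_head_a(feats), task_head_b(feats))
```

Training both heads jointly updates the shared backbone with signal from both tasks, which is the source of the inductive transfer discussed above.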
Incentives that compensate for the costs involved in the decentralized training of federated learning (FL) models are a key stimulus for clients' long-term participation. However, it is difficult to convince clients to participate in FL with quality data due to the lack of: (i) complete information on the clients' data quality and properties; (ii) the value of the clients' data contributions; and (iii) a trustworthy mechanism for monetary reward offers. This often leads to poor training and communication efficiency. While several works focus on strategic incentive design and client selection to overcome this problem, there is a major knowledge gap in an overall design tailored to the foreseen digital economy, including Web 3.0, that simultaneously meets the learning objectives. To address this gap, we propose a contribution-based tokenized incentive scheme, \texttt{FedToken}, supported by blockchain technology, which ensures a fair distribution among the clients corresponding to their data valuation during model training. Leveraging an engineered Shapley-based scheme, we first approximate the contributions of the local models during model aggregation, and then strategically schedule the clients to lower the communication rounds for convergence and anchor ways to allocate \emph{affordable} tokens under a constrained monetary budget. Extensive simulations demonstrate the efficacy of our proposed scheme.
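A toy sketch of contribution-based token allocation: client contributions are approximated (here by simple leave-one-out utility differences, a common cheap proxy for Shapley values, not the paper's engineered scheme) and a fixed token budget is split in proportion. The utility function and budget are illustrative.

```python
def leave_one_out_contributions(clients, utility):
    """Approximate each client's contribution as its marginal utility."""
    full = utility(clients)
    return {c: full - utility([o for o in clients if o != c]) for c in clients}

def allocate_tokens(contribs, budget):
    """Split a fixed token budget in proportion to nonnegative contributions."""
    total = sum(max(v, 0.0) for v in contribs.values()) or 1.0
    return {c: budget * max(v, 0.0) / total for c, v in contribs.items()}

# Toy utility: model quality grows with each client's (made-up) data score.
quality = {"A": 3.0, "B": 1.0}
utility = lambda group: sum(quality[c] for c in group)
tokens = allocate_tokens(leave_one_out_contributions(["A", "B"], utility),
                         budget=100.0)
print(tokens)  # client A contributed 3x as much, so it receives 3x the tokens
```

Exact Shapley values require averaging marginal contributions over all orderings, which is exponential in the number of clients; this is why practical schemes, including the one above, rely on approximations.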